8 research outputs found

    Reachable by walking: inappropriate integration of near and far space may lead to distance errors

    Our experimental results show that infants, while learning to walk, attempt to reach for unreachable objects. These distance errors may result from inappropriate integration of reaching and locomotor actions, attention control, and near/far visual space. During their first months, infants are fairly immobile; their attention and actions are constrained to near (reachable) space. Walking, in contrast, draws attention to distal displays and provides the information needed to disambiguate far space. In this paper, we use reward-mediated learning to mimic the development of absolute distance perception. The results obtained with the NAO robot further support our hypothesis that the representation of near space changes after the onset of walking, which may cause the occurrence of distance errors.

    Reaching for the Unreachable: Reorganization of Reaching with Walking

    Previous research suggests that reaching and walking behaviors may be linked developmentally, as reaching changes at the onset of walking. Here we report new evidence of an apparent loss of the distinction between reachable and nonreachable distances as children start walking. The experiment compared non-walkers, walkers with help, and independent walkers in a reaching task with targets at varying distances. Reaching attempts, contact, leaning, and communication behaviors were recorded. Most of the children reached for the unreachable objects the first time they were presented. Non-walkers, however, reached less on subsequent trials, showing clear adjustment of their reaching decisions after failures. In contrast, walkers consistently attempted reaches to targets at unreachable distances. We suggest that these reaching errors may result from inappropriate integration of reaching and locomotor actions, attention control, and near/far visual space. We propose a reward-mediated model, implemented on a NAO humanoid robot, that replicates the main results from our study, showing an increase in reaching attempts to nonreachable distances after the onset of walking.
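    The core idea of a reward-mediated reaching model can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: all names, distances, and the learning rule (a simple delta-rule value update from success/failure reward) are assumptions.

```python
import random

# Illustrative reward-mediated reaching sketch (assumed parameters, not the
# paper's model): the agent learns a value for attempting a reach at each
# target distance and updates it from the reward of each attempt.

ALPHA = 0.3          # learning rate (assumed)
REACH_LIMIT = 0.4    # metres the arm can actually cover (assumed)
DISTANCES = [0.2, 0.3, 0.5, 0.7]   # target distances in metres (assumed)

def reward(distance):
    """+1 if the reach succeeds (target within arm's length), -1 otherwise."""
    return 1.0 if distance <= REACH_LIMIT else -1.0

def train(trials=200, seed=0):
    rng = random.Random(seed)
    value = {d: 0.5 for d in DISTANCES}   # optimistic start: everything looks reachable
    for _ in range(trials):
        d = rng.choice(DISTANCES)
        if value[d] > 0:                  # attempt a reach only while expected value is positive
            value[d] += ALPHA * (reward(d) - value[d])
    return value

values = train()
```

    After training, reachable distances keep a positive value while unreachable ones drift negative and attempts to them stop, mirroring the adjustment seen in non-walkers.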

    Integration of Static and Self-motion-Based Depth Cues for Efficient Reaching and Locomotor Actions

    The common approach to estimating the distance of an object in computer vision and robotics is to use stereo vision. Stereopsis, however, provides good estimates only within near space and is thus more suitable for reaching actions. In order to successfully plan and execute an action in far space, other depth cues must be taken into account. Self-body movements, such as head and eye movements or locomotion, can provide rich depth information. This paper proposes a model for the integration of static and self-motion-based depth cues for a humanoid robot. Our results show that self-motion-based visual cues improve the accuracy of distance perception and, combined with other depth cues, provide the robot with a robust distance estimator suitable for both reaching and walking actions.
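    A minimal sketch of this kind of cue combination, assuming a pinhole stereo model and inverse-variance fusion (a standard technique for merging noisy estimates; the camera parameters and the paper's actual integration scheme are not from the source):

```python
# Two independent distance estimates -- stereo disparity and self-motion
# parallax -- fused by inverse-variance weighting. All constants are assumed.

FOCAL_PX = 500.0     # focal length in pixels (hypothetical camera)
BASELINE_M = 0.07    # stereo baseline in metres (hypothetical)

def stereo_distance(disparity_px):
    """Classic pinhole relation: Z = f * B / disparity."""
    return FOCAL_PX * BASELINE_M / disparity_px

def parallax_distance(translation_m, angle_shift_rad):
    """Motion parallax: a lateral self-movement of t metres shifts a target
    at distance Z by roughly t / Z radians, so Z is about t / shift."""
    return translation_m / angle_shift_rad

def fuse(z1, var1, z2, var2):
    """Inverse-variance weighted average of two noisy estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * z1 + w2 * z2) / (w1 + w2)

# Stereo is precise in near space (small variance) but degrades with distance;
# parallax from self-motion stays informative in far space.
z_stereo = stereo_distance(25.0)          # about 1.4 m
z_parallax = parallax_distance(0.10, 0.071)
z = fuse(z_stereo, 0.01, z_parallax, 0.04)
```

    The fused estimate leans toward whichever cue is more reliable at that range, which is why adding self-motion cues helps most beyond reaching distance.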

    Pose Estimation through Cue Integration: a Neuroscience-Inspired Approach

    Primates possess a superior ability in dealing with objects in their environment. One of the keys to achieving this ability is the continuous, concurrent use of multiple cues, especially visual ones. This work aims to improve the skills of robotic systems in their interaction with nearby objects. The basic idea is to improve the visual estimation of objects in the world by merging different visual cues of the same stimuli. A computational model of stereoptic and perspective orientation estimators, merged according to different criteria, is implemented on a robotic setup and tested in different conditions. Experimental results suggest that the integration of monocular and binocular cues can make robot sensory systems more reliable and versatile.
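    One subtlety when merging orientation estimators is that angles wrap around, so estimates cannot be averaged arithmetically. A hedged sketch of reliability-weighted merging via a circular mean (an illustrative technique; the paper's actual merging criteria are not shown):

```python
import math

# Hypothetical merging of two orientation estimates -- e.g. one stereoptic,
# one perspective-based -- as weighted unit vectors, so that angles near the
# +/-180 degree wrap average correctly.

def merge_orientations(theta_a, w_a, theta_b, w_b):
    """Weighted circular mean of two angle estimates (radians).
    The weights can encode each cue's estimated reliability."""
    x = w_a * math.cos(theta_a) + w_b * math.cos(theta_b)
    y = w_a * math.sin(theta_a) + w_b * math.sin(theta_b)
    return math.atan2(y, x)

# A naive arithmetic mean of 170 deg and -170 deg gives 0 deg;
# the circular mean correctly gives 180 deg.
merged = merge_orientations(math.radians(170), 1.0, math.radians(-170), 1.0)
```

    Unequal weights shift the result toward the more reliable cue, which is one simple way to realize the "different criteria" the abstract mentions.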

    A 3D grasping system based on multimodal visual and tactile processing

    Purpose – The purpose of this paper is to present a novel multimodal approach to the problem of planning and performing a reliable grasping action on unmodeled objects. Design/methodology/approach – The robotic system is composed of three main components. The first is a conceptual manipulation framework based on grasping primitives. The second is a visual processing module that uses stereo images and biologically inspired algorithms to accurately estimate the pose, size, and shape of an unmodeled target object. A grasp action is planned and executed by the third component of the system, a reactive controller that uses tactile feedback to compensate for possible inaccuracies and thus complete the grasp even in difficult or unexpected conditions. Findings – Theoretical analysis and experimental results have shown that the proposed approach to grasping, based on the concurrent use of complementary sensory modalities, is very promising and suitable even for changing, dynamic environments. Research limitations/implications – Additional setups with more complicated shapes are being investigated, and each module is being improved in both hardware and software. Originality/value – This paper introduces a novel, robust, and flexible grasping system based on multimodal integration.
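    The reactive, tactile-driven part of such a system can be sketched as a closed control loop. This is a hypothetical stand-in (the sensor interface, thresholds, and simulated object are all invented for illustration, not the paper's controller):

```python
# Illustrative tactile-driven reactive grasp loop: the gripper keeps closing
# until its tactile sensor reports firm contact, compensating small pose/size
# estimation errors from the visual module. All constants are assumed.

TARGET_PRESSURE = 0.8   # normalised contact pressure to reach (assumed)
STEP = 0.05             # closing increment per control tick (assumed)
MAX_CLOSE = 1.0         # fully closed

def reactive_grasp(read_pressure, max_ticks=100):
    """Close step by step until tactile contact is firm.
    `read_pressure(position)` is a stand-in for the real tactile sensor."""
    position = 0.0
    for _ in range(max_ticks):
        if read_pressure(position) >= TARGET_PRESSURE:
            return position            # firm contact: stop closing here
        position = min(position + STEP, MAX_CLOSE)
    return position                    # timed out without firm contact

# Simulated object: contact begins at 60% closure, pressure ramps up from there.
simulated_sensor = lambda pos: max(0.0, (pos - 0.6) * 5.0)
stop_at = reactive_grasp(simulated_sensor)
```

    Because the stopping condition is tactile rather than a pre-computed closure angle, the same loop works even when the visual size estimate is slightly off.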